
    Split rank of triangle and quadrilateral inequalities

    A simple relaxation of two rows of a simplex tableau is a mixed integer set consisting of two equations with two free integer variables and non-negative continuous variables. Recently Andersen et al. [2] and Cornuéjols and Margot [13] showed that the facet-defining inequalities of this set are either split cuts or intersection cuts obtained from lattice-free triangles and quadrilaterals. Through a result by Cook et al. [12], it is known that one particular class of facet-defining triangle inequality does not have a finite split rank. In this paper, we show that all other facet-defining triangle and quadrilateral inequalities have a finite split rank. The proof is constructive: given a facet-defining triangle or quadrilateral inequality, we present an explicit sequence of split inequalities that can be used to generate it.
    Keywords: mixed integer programs, split rank, group relaxations
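    For concreteness, the two-row relaxation described above can be written as in the two-row separation entry later in this listing, and the split cuts in question arise from split disjunctions on the integer variables. A minimal sketch of the standard notation (f, R, and the split direction (pi, pi_0) are generic symbols, not tied to a particular instance):

        % Two free integer variables x, n non-negative continuous
        % variables s, fractional point f and ray matrix R:
        \[
        P_I = \bigl\{ (x, s) \in \mathbb{Z}^2 \times \mathbb{R}^n_{+} : x = f + R s \bigr\}.
        \]
        % A split cut is an inequality valid for both sides of a split
        % disjunction given by an integral vector \pi and integer \pi_0:
        \[
        \pi^{\mathsf T} x \le \pi_0 \quad \text{or} \quad \pi^{\mathsf T} x \ge \pi_0 + 1.
        \]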

    Integral Farkas type lemmas for systems with equalities and inequalities

    A central result in the theory of integer optimization states that a system of linear diophantine equations Ax = b has no integral solution if and only if there exists a vector y in the dual lattice, i.e. with y^T A integral, such that y^T b is fractional. We extend this result to systems that contain both equations and inequalities, {Ax = b, Cx ≤ d}. We show that a certificate of integral infeasibility is a linear system with rank(C) variables containing no integral point.
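    As a small worked illustration of the classical statement (an example of ours, not taken from the paper): for the one-equation system 2x = 1, the multiplier y = 1/2 is a certificate of integral infeasibility.

        % A = (2), b = (1): the equation 2x = 1 has no integer solution.
        \[
        y = \tfrac{1}{2} \quad \Longrightarrow \quad
        y^{\mathsf T} A = 1 \in \mathbb{Z}, \qquad
        y^{\mathsf T} b = \tfrac{1}{2} \notin \mathbb{Z},
        \]
        % so y lies in the dual lattice and makes y^T b fractional,
        % certifying that Ax = b admits no integral solution.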

    An algorithm for the separation of two-row cuts

    We consider the question of finding deep cuts from a model with two rows of the type P_I = {(x, s) ∈ Z^2 × R^n_+ : x = f + Rs}. To do so, we show how to reduce the complexity of setting up the polar of conv(P_I) from a quadratic number of integer hull computations to a linear number of integer hull computations. Furthermore, we present an algorithm that avoids computing all integer hulls. A polynomial running time is not guaranteed, but computational results show that the algorithm runs quickly in practice.
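    A schematic statement of the separation problem behind this entry, assuming the common normalization of nontrivial valid inequalities to the form sum_i alpha_i s_i >= 1 in the continuous variables; the details of the actual polar construction are in the paper.

        % Given a point (x*, s*) with x* = f + R s*, search the polar
        % (the set of valid coefficient vectors alpha) for a most
        % violated cut:
        \[
        \min_{\alpha} \; \sum_{i=1}^{n} \alpha_i s^{*}_i
        \quad \text{s.t.} \quad
        \sum_{i=1}^{n} \alpha_i s_i \ge 1 \;\; \text{for all } (x, s) \in P_I.
        \]
        % If the optimum is below 1, the minimizer gives a violated
        % two-row cut; otherwise (x*, s*) satisfies all such cuts.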

    Polyhedral properties for the intersection of two knapsacks

    We address the question to what extent polyhedral knowledge about individual knapsack constraints suffices, or fails, to describe the convex hull of the binary solutions to their intersection. It turns out that the sign patterns of the weight vectors are responsible for the types of combinatorial valid inequalities appearing in the description of the convex hull of the intersection. In particular, we introduce the notion of an incomplete set inequality, which is based on a combinatorial principle for the intersection of two knapsacks. We outline schemes to compute nontrivial bounds for the strength of such inequalities w.r.t. the intersection of the convex hulls of the initial knapsacks. An extension of the inequalities to the mixed case is also given. This opens up the possibility to use the inequalities in an arbitrary simplex tableau.
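    In generic notation (ours, not necessarily the paper's), the objects being compared are the following polytopes, built from two binary knapsacks whose weight vectors a and b may have arbitrary sign patterns:

        % Two binary knapsacks with capacities alpha and beta:
        \[
        K_a = \{ x \in \{0,1\}^n : a^{\mathsf T} x \le \alpha \}, \qquad
        K_b = \{ x \in \{0,1\}^n : b^{\mathsf T} x \le \beta \}.
        \]
        % The convex hull of the intersection is contained in the
        % intersection of the individual convex hulls,
        \[
        \operatorname{conv}(K_a \cap K_b) \;\subseteq\; \operatorname{conv}(K_a) \cap \operatorname{conv}(K_b),
        \]
        % and the inclusion is strict in general; the inequalities that
        % close this gap are the subject of the entry above.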

    Min Max Generalization for Two-stage Deterministic Batch Mode Reinforcement Learning: Relaxation Schemes

    We study the min-max optimization problem introduced in [22] for computing policies for batch mode reinforcement learning in a deterministic setting. First, we show that this problem is NP-hard. In the two-stage case, we provide two relaxation schemes. The first relaxation scheme works by dropping some constraints in order to obtain a problem that is solvable in polynomial time. The second relaxation scheme, based on a Lagrangian relaxation where all constraints are dualized, leads to a conic quadratic programming problem. We also theoretically prove and empirically illustrate that both relaxation schemes provide better results than those given in [22].
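    In generic terms (not the paper's exact model), both schemes are standard ways of bounding a constrained minimization from below, which is why they yield valid relaxations:

        % For a problem  min { f(x) : x in X, g(x) <= 0 },  dropping the
        % constraints enlarges the feasible set and gives one bound,
        \[
        \min_{x \in X} f(x) \;\le\; \min_{x \in X,\; g(x) \le 0} f(x),
        \]
        % while dualizing them with multipliers lambda >= 0 gives
        % another (weak duality):
        \[
        L(\lambda) = \min_{x \in X} \bigl( f(x) + \lambda^{\mathsf T} g(x) \bigr)
        \;\le\; \min_{x \in X,\; g(x) \le 0} f(x)
        \qquad \text{for all } \lambda \ge 0.
        \]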

    Intermediate integer programming representations using value disjunctions

    We introduce a general technique for creating an extended formulation of a mixed-integer program. We classify the integer variables into blocks, each of which generates a finite set of vector values. The extended formulation is constructed by creating a new binary variable for each generated value. Initial experiments show that the extended formulation can have a more compact complete description than the original formulation. We prove that, using this reformulation technique, the facet description decomposes into one “linking polyhedron” per block and the “aggregated polyhedron”. Each of these polyhedra can be analyzed separately. For the case of identical coefficients in a block, we provide a complete description of the linking polyhedron and a polynomial-time separation algorithm. Applied to the knapsack problem with a fixed number of distinct coefficients, this result provides a complete description in an extended space with a polynomial number of variables. On the basis of this theory, we propose a new branching scheme that analyzes the problem structure. It is designed to be applied in those subproblems of hard integer programs where LP-based techniques do not provide good branching decisions. Preliminary computational experiments show that it is successful for some benchmark problems of multi-knapsack type.
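    A minimal sketch of the reformulation idea in generic notation (the block symbol B and the value set V_B are placeholders of ours): for a block B of integer variables x_B, let V_B be the finite set of values the block can generate, and introduce one binary variable per value.

        % Selection and linking constraints for one block B:
        \[
        \sum_{v \in V_B} z_{B,v} = 1, \qquad
        x_B = \sum_{v \in V_B} v \, z_{B,v}, \qquad
        z_{B,v} \in \{0,1\} \;\; \text{for all } v \in V_B.
        \]
        % Constraints coupling different blocks are rewritten in the z
        % variables (the "aggregated polyhedron"), while the equations
        % above form one "linking polyhedron" per block.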

    Machine Learning to Balance the Load in Parallel Branch-and-Bound

    We describe in this paper a new approach to parallelizing branch-and-bound over a given number of processors. We propose to split the optimization of the original problem into the optimization of several subproblems that can be solved separately, with the goal of balancing the amount of work carried out by each processor while achieving interesting speedups. The main innovation of our approach is the use of machine learning to create a function able to estimate the difficulty (number of nodes) of a subproblem of the original problem. We also present a set of features developed to characterize the encountered subproblems; these features are used as input to the learned function when estimating the difficulty of a subproblem. The estimated numbers of nodes are then used to decide how to partition the original optimization tree into a given number of subproblems and how to distribute them among the available processors. The experiments we carry out show that our approach succeeds in balancing the amount of work between the processors, and that interesting speedups can be achieved with little effort.
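    A minimal sketch of how such difficulty estimates could drive the distribution step, assuming a learned estimator is available; the function names below are illustrative and the paper's actual partitioning and distribution procedure may differ. The sketch uses a classic greedy longest-processing-time assignment:

        from typing import Callable, Dict, List, Sequence
        import heapq

        def assign_subproblems(
            subproblems: Sequence[object],
            estimate_nodes: Callable[[object], float],
            n_processors: int,
        ) -> Dict[int, List[object]]:
            """Greedily assign subproblems to processors by estimated difficulty.

            `estimate_nodes` stands in for the learned model that maps a
            subproblem (via its features) to a predicted number of B&B nodes.
            Subproblems are sorted by decreasing estimate and each one is
            placed on the currently least loaded processor.
            """
            loads = [(0.0, p) for p in range(n_processors)]  # (estimated load, processor id)
            heapq.heapify(loads)
            assignment: Dict[int, List[object]] = {p: [] for p in range(n_processors)}
            for sub in sorted(subproblems, key=estimate_nodes, reverse=True):
                load, proc = heapq.heappop(loads)
                assignment[proc].append(sub)
                heapq.heappush(loads, (load + estimate_nodes(sub), proc))
            return assignment

    With exact node counts this is the classic longest-processing-time heuristic, whose makespan is within a factor 4/3 of optimal; here the counts are only estimates, so the guarantee is heuristic.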

    A Supervised Machine Learning Approach to Variable Branching in Branch-And-Bound

    We present in this paper a new approach that uses supervised machine learning techniques to improve the performance of optimization algorithms in the context of mixed-integer programming (MIP). We focus on the branch-and-bound (B&B) algorithm, the traditional algorithm used to solve MIP problems. In B&B, variable branching is the key component that most conditions the efficiency of the optimization. Good branching strategies exist but are computationally expensive and usually hinder the optimization rather than improve it. Our approach consists in imitating the decisions taken by a supposedly good branching strategy, strong branching in our case, with a fast approximation. To this end, we develop a set of features describing the state of the ongoing optimization and show how supervised machine learning can be used to approximate the desired branching strategy. The approximating function is created by a supervised machine learning algorithm from a set of observed branching decisions taken by the target strategy. The experiments performed on randomly generated and standard benchmark (MIPLIB) problems show promising results.
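    A minimal sketch of the imitation step under these assumptions: strong-branching scores have already been recorded offline together with the corresponding features, and an off-the-shelf regressor (scikit-learn's ExtraTreesRegressor, chosen here for illustration) stands in for the learned approximation; function names are ours, not the paper's.

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor

        def fit_branching_model(features: np.ndarray,
                                sb_scores: np.ndarray) -> ExtraTreesRegressor:
            """Fit a fast approximation of strong branching.

            `features` has one row per observed (node, candidate variable)
            pair, describing the state of the ongoing optimization;
            `sb_scores` holds the strong-branching scores recorded for
            those pairs.
            """
            model = ExtraTreesRegressor(n_estimators=100, random_state=0)
            model.fit(features, sb_scores)
            return model

        def pick_branching_variable(model: ExtraTreesRegressor,
                                    candidate_features: np.ndarray) -> int:
            """At a B&B node, score every fractional candidate with the
            learned approximation and branch on the highest-scoring one."""
            scores = model.predict(candidate_features)
            return int(np.argmax(scores))

    Plugging such a callback into a solver replaces the expensive strong-branching evaluations at each node by a single prediction per candidate, which is the intended gain over calling strong branching itself.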